Justin Schuster - Associate Podcast Producer
Gabrielle Sierra - Editorial Director and Producer
Transcript
LINDSAY:
The world has turned dangerous. Is the United States prepared to meet the new challenges it might face? In this special series from The President's Inbox, we're bringing you conversations with Washington insiders to assess whether the United States is ready for a new, more dangerous world.
Great power competition is no longer solely a geographical contest for the control of borders and sea lanes. The rise of the internet created invisible networks that connect the modern world and have given countries the ability to penetrate and disrupt their adversaries' economies and societies without firing a shot.
https://www.youtube.com/watch?v=Y-2CX0Fl1bU
ABC News:
How grave is the threat the United States is facing right now in terms of a cyber attack?
ABC News:
It's not only a grave threat, but it's an unpredictable one.
https://www.youtube.com/watch?v=SPbQQ63y_zg
Fox 2:
A report from Microsoft finds Russia, China, Iran, and North Korea have increased the use of AI to create fake content, spread disinformation, and launch cyber attacks.
https://www.youtube.com/watch?v=prsWw4q8XOM
CNN:
PRC hackers are targeting our critical infrastructure.
LINDSAY:
Artificial intelligence is now intensifying that threat. China, Russia, and other countries are using AI to probe U.S. systems, steal data, compromise infrastructure systems, and manipulate information flows at unprecedented scale. The United States, in turn, is scrambling to protect its power grids and financial networks from cyber attacks and malware while seeking to blunt digital propaganda designed to divide and deceive the public. The question remains, however, whether Washington is acting fast enough with the right tools to blunt the threat that the weaponization of AI poses to America's security, prosperity, and democracy.
From the Council on Foreign Relations, welcome to The President's Inbox. I'm Jim Lindsay. Joining me here today is Jessica Brandt, senior fellow for technology and national security here at the Council and the former Director of the Foreign Malign Influence Center in the Office of the Director of National Intelligence. Jessica, welcome to The President's Inbox.
BRANDT:
Thanks so much for having me.
LINDSAY:
Okay, Jessica, I wanted to chat with you because you know artificial intelligence. It has been the rage ever since ChatGPT burst onto the scene back in November of 2022. Obviously, there are tremendous potential upsides with AI, revolutionary breakthroughs in science, potential to produce new medical therapies. A lot of talk about how it's going to change or revolutionize the way we all work. But at the same time, AI comes with some potentially serious downside risks. It can be weaponized. And there's a great deal of concern that AI may make cybersecurity attacks more effective. It's going to enhance adversaries' intelligence-gathering operations, and it's going to make it a lot easier for other countries to, what we call, pollute the information environment with misinformation and disinformation. So maybe we can just begin with you providing sort of your overall assessment as to whether or not we really need to worry about AI or whether this is an issue that is being hyped.
BRANDT:
Yeah, it's a great question. I think there are real risks. I mean, you've pointed to the benefits, and those are real too. But there are important ways that AI can shape international geopolitical competition. And then there are also some just sort of hard risks to public safety and national security related to cyber. That's a really important one that you gestured at. Bio, and then also foreign malign influence operations, as you've also pointed to. So there's a lot there. And so I'm glad we're having this conversation today.
LINDSAY:
So let's begin with the cybersecurity front. I've heard a lot about that. I mean, all of us have become used to over the years the need to protect ourselves against scams. And we've moved from having password as our password to things like two-factor authentication. People realize you're going to be careful about clicking on to links that may take you places you don't want to go or install malware. What is it about artificial intelligence that changes the cybersecurity threat?
BRANDT:
Yeah. My sense is that AI is supercharging offense and defense-
LINDSAY:
Okay.
BRANDT:
... and helping states to sort of speed up or scale up the things they're already doing in cyberspace. So on the offensive side, it's sort of a force multiplier, enabling states to basically scan for vulnerabilities and find them faster, to write the kinds of malicious code that would help them exploit those vulnerabilities, to write much more persuasive phishing emails in a wide variety of languages, and all the ways I'm sure we'll get to around supporting influence operations. So it's not that states weren't already conducting that kind of activity. They're just sort of better able to do it faster and at a lower cost.
And then on the other side of the ledger, the same ability to parse through enormous amounts of data to spot anomalies, to identify vulnerabilities, that can have great defensive advantages as well. And so we certainly see that on the defense side. I mean, I think there's sort of this fundamental asymmetry where attackers only have to succeed once and defenders have to succeed all the time. And that isn't really changed by AI. But there's this corollary, which is that defense is more expensive than offense. And so these tools are probably going to lower costs for everybody. But it's on the defensive side that those costs really matter, so it's at least plausible that AI may have important defensive applications.
LINDSAY:
I take your point that the weaponization of AI is a good thing when you're doing it to other people.
BRANDT:
Yeah.
LINDSAY:
It's not a good thing when they're doing-
BRANDT:
Being done to you.
LINDSAY:
When being done to you. But I want to ask you a question. Is the United States more vulnerable to cybersecurity and malware attacks because we are more integrated into the internet? I know China has the great firewall. I'm not sure to what extent there's internet penetration in Russia. Give me a sense of sort of from the 40,000-foot level what the relative vulnerabilities are.
BRANDT:
Yeah. I think there is this fundamental asymmetry between democratic and open systems and authoritarian and closed systems. And so that great firewall that you pointed to that enables China to censor the speech of millions of its citizens also enables it to identify and stop attacks much sooner than we can, because our government is not sitting on the networks of the private owners and operators of infrastructure, for example. And we wouldn't want that. We shouldn't want that. I mean, when I was in government, I got asked all the time like, "Why are we hearing about this potentially from a private sector company and not from you?" We can get into this, but it's because your intelligence community is not sitting on the internet watching the constitutionally-protected speech of American citizens. We shouldn't want that. And so that asymmetry is real and that's what China is exploiting. I mean, maybe we'll have a chance to talk about the Salt Typhoon and Volt Typhoon. I think those are very-
LINDSAY:
Why don't you tell me about Salt Typhoon and Volt Typhoon right now?
BRANDT:
Sure. So while I was in government a year ago, the U.S. government identified and shared with the public that China had broken into many telecommunications systems, and basically had the ability to listen to calls, copy calls, track the movements of Americans. It really was quite a stunning piece of intelligence.
LINDSAY:
Are they still there?
BRANDT:
I think it's not even clear that we know or would know the full scale of the problem, right? I mean, these intrusions, we think, in many cases, they've been there for years, right? And went undetected by, again, the private companies that own our sort of telecommunications infrastructure. And so it's possible that the full scale of that activity will never be known. And it's just really hard to grasp the full picture there. So we should expect that what we know is stunning and we may not know everything. And that's just one piece of the campaign. The telecommunications piece is so important. The intelligence value of what they potentially could have collected, really important. But there's a much broader campaign that's about sitting on these systems and networks that run our water systems, our transportation systems, the pipelines, the power grids. And China has pre-positioned itself on those networks.
LINDSAY:
Do we think it's just China? Do we think the Russians are doing this? The Iranians?
BRANDT:
Yeah. I think China has a particular set of goals and capabilities. And it's doing this because it worries about a Taiwan scenario, for example. In that case, it would like to be able to either use those assets to slow potentially a military mobilization or just to deter us from getting involved in the first place, right? There's a coercion piece of this because if we can't protect the homeland or if we have to think seriously about the homeland consequences of a certain action, then it may shape our decision-making. So I think it is a result of China's capabilities and also its particular goals.
LINDSAY:
Are these that you're talking about something that terrorist groups could do as well? And I asked because for the first 20 years of the 21st century, U.S. foreign policy was really worried about terrorists and what terrorists could do. When I hear people talking about AI, one of the things I hear is how it empowers the individual, which would seem to make you think that it may allow terrorists to write malicious code to become more efficient at spear phishing. But I don't know whether that's because I've read too many Robert Ludlum movies-
BRANDT:
Yeah.
LINDSAY:
... or novels. So I don't know where you come down on that.
BRANDT:
Yeah. Look, I think the widespread availability of these tools will make it possible for both less sophisticated states and non-state actors to conduct this kind of activity. I also think it's the case that the big states, the major powers, the big players, they're the ones with the most capable operators that are sitting on the most advanced models. So unclear exactly how the picture will unfold, but there are arguments on both sides of that ledger.
LINDSAY:
Let's talk about the world you worked in in intelligence. And I'm not asking you to reveal any secrets, but I've read a lot of claims that AI is going to turbocharge intelligence operations, that it's partly going to enable countries to process just vast reams of information in a way and at a speed that's unmatched, and that they're going to be much better at finding patterns in data, and that will allow countries, I guess, to act more quickly, more effectively, and what have you. And obviously, that ability cuts both ways. In the United States, my assumption is, even with the government shut down, people are working day and night to find ways to exploit all of the potential national security benefits of AI. But how do you think that is going to shake out? Is it going to really change intelligence operations?
BRANDT:
Yeah. I think it sort of operates on two levels. I think there's enormous promise for AI to be able to help us to synthesize and query a whole lot of information quickly. It can also make it easier just to translate documents. I mean, there's the example of the Israeli intelligence services going in and getting a whole bunch of documents on Iran's nuclear ambitions. But it takes a long time to translate tens of thousands of pages and to, by hand, sift through for insights. So just automating some of the more manual stuff and getting faster to a rough translation might get us answers at speed, which can be important and very valuable to the policymakers who depend on intelligence to make decisions in real time. So that's sort of one level of how it can improve our ability to conduct intelligence. But then there's how it'll make it harder. And I think there's real worry that facial recognition systems, gait recognition, China has ubiquitous-
LINDSAY:
What is gait... Oh, gait recognition-
BRANDT:
Gait like-
LINDSAY:
... G-A-I-T.
BRANDT:
Yes, exactly.
LINDSAY:
How you walk.
BRANDT:
How you walk. And so ubiquitous Hikvision cameras, and, again, China sits on a lot of data and it doesn't have a lot of scruples about how-
... it uses it. And so they may be quite able actually to figure out and track the movements and identify human operators. And so it'll make the human-
LINDSAY:
Who we call spies.
BRANDT:
Yeah, spies. Oh, good old-fashioned spies.
LINDSAY:
This is right out of Mission: Impossible-
BRANDT:
Yeah.
LINDSAY:
... with Tom Cruise.
BRANDT:
And so it'll make that work harder. But in my view, that work is no less important. We rely on all of the sources of intelligence to give us the best and most accurate threat picture.
LINDSAY:
Do you think countries will become more inclined to run AI-operated cyber operations, or does that pose risk in and of itself?
BRANDT:
I think it's likely that they will integrate AI into their operations.
We already see that on the influence operations front, for example. So I do think that we should expect to see basically our most persistent and advanced competitors use-
... all of the tools at their disposal to achieve asymmetric advantage in this competition. There's no reason to think that they wouldn't use AI tools as well.
LINDSAY:
I'm sure if I had their officials here, they would say the United States-
BRANDT:
Yeah.
LINDSAY:
... is doing this against them. What's good for the goose is good for the gander.
Let's talk about the area that you spent the most time working on, foreign influence operations. First, just sort of paint a picture for me as you sort of look out at foreign influence operations. We've seen efforts by countries to meddle in the U.S. election. Obviously, a lot of talk in 2016 about Russian efforts. Much of what I've read about AI suggests that there's really no comparison to what countries can do today versus what the Russians did eight years ago. Technology has advanced so much with deepfakes and the ability to identify microtargets. Sort of talk to me a little bit about that change in technology and the changing scene.
BRANDT:
Yeah, sure. I think there's three big trends that are really shaping the landscape and making this work much, much more complicated. The first is just a growing number of more diverse actors who are interested in conducting this kind of activity.
LINDSAY:
So who's in this new universe of actors?
BRANDT:
A wide variety of countries-
... and also non-state actors. Importantly, these commercial firms. I would say that's the second big trend is that-
LINDSAY:
Commercial firm.
BRANDT:
...the rise of commercial firms influence-
LINDSAY:
Okay.
BRANDT:
... for hire. And the reason that is consequential is because it makes attribution much harder. And when you're talking about the work of intelligence professionals, that work really starts and ends with attribution to a foreign actor. That's how-
LINDSAY:
Okay.
BRANDT:
... you distinguish between the constitutionally-protected speech of American citizens and a national security challenge, something we're supposed to be protecting and something we're supposed to be mitigating.
And I would say these trends overlap because the third trend is the rise of these technologies. We think a lot about generative AI and LLMs, but it's also just big data analytics and other related tools, which are enabling the commercial firms to do the work. They're further complicating attribution. And so it's making it harder to detect these operations and to figure out who's behind them. And then I think the public conversation is highly indexed on the threat of deepfakes, and I think those are real.
LINDSAY:
Okay. And deepfakes are when you can create a video of somebody doing something, but that's not them.
BRANDT:
Yeah, synthetic video, but also audio, for example. So I think this dominates the public mind. But I am at least as worried about the prospect that China, which has historically been really ham-handed in its influence operations, I mean, just not great at this, that if they can get better, they can use sentiment analysis to-
LINDSAY:
What is sentiment analysis?
BRANDT:
They can basically use big data analytics-
LINDSAY:
Okay.
BRANDT:
... to identify trends in public opinion and individual particular targets or sort of subgroups. And they can also marry that up with the ability to generate endlessly novel text so that they can just iterate their way to improving their operations. Basically, they can-
LINDSAY:
Much like a comic tests material out on the-
BRANDT:
Totally or the way news headlines-
LINDSAY:
Yep.
BRANDT:
A/B testing in the social-
LINDSAY:
Okay.
BRANDT:
... media marketing, you try two headlines-
LINDSAY:
They can do it at scale.
BRANDT:
... and see which one works.
LINDSAY:
Yep.
BRANDT:
So if they can use big data analytics and then just keep throwing spaghetti at the wall to figure out what works, they will get better at this. And I think that is the big worry. And so we saw earlier in the year these documents from this Chinese firm, GoLaxy, and it's exactly what I'm talking about-
LINDSAY:
Yeah.
BRANDT:
... right? I mean, they're scraping and building profiles of... I think it was more than a hundred members of Congress and other high-profile Americans, and using AI tools to generate more sophisticated, potentially more personalized, more persuasive content. I think that is the thing that keeps me most-
LINDSAY:
But this goes back to your point about commercial firms, because GoLaxy is a company, but it's also a company which probably isn't many steps away from the Chinese government, that is sort of going out and mining these technologies. And they've done a lot of work, I think, in Asia, particularly trying to move attitudes-
BRANDT:
Taiwan-
LINDSAY:
... in Taiwan.
BRANDT:
... and Hong Kong, yeah. So the way we know about this, this sort of reached the public domain because there was a trove of documents that a bunch of researchers at Vanderbilt University found and put on the internet. And you can see in those documents that they acknowledge having done work for the Chinese government. So it's just a very crisp example of all of these trends, right? It's new actors.
It's the hiding of your hand through the use of commercial firms. It's the use of big data analytics and marrying that with generative AI to achieve something that's more personalized and potentially persuasive.
LINDSAY:
What you're describing, Jessica, sounds like highly effective propaganda.
BRANDT:
Yeah, I think that's where we're headed, and again, a much more complicated challenge for defenders, because if you can't identify and attribute the activity, it really limits the response. I mean, when I was in government, there was just enormous policymaker demand around the question of authentication, "Is this real or synthetic?" And then attribution, "Who did it?" There was enormous demand for an answer to the question, "Is this real or not?" And I think ultimately, what matters most is, "Was this an effort of a foreign government, and what was its purpose?"
LINDSAY:
Let's talk about how the United States should respond to this threat, which is a significant one. I mean, we live in a democracy in which the idea is the people get to choose who governs them. And that rests on a presumption that they're going to have access to really good information, but misinformation and disinformation campaigns waged by foreign governments pollute the information environment. On the other hand, I think a lot of people are concerned that if the government sets itself up trying to determine what is good information coming from good sources versus not, that you are infringing upon First Amendment rights, that people who are simply exercising their First Amendment rights will be swept up or banned or canceled. So how does the government go about doing this in a way that is both effective and respectful of the rights of American citizens?
BRANDT:
Yeah. It's such an important question. I mean, I think we cannot lose the forest for the trees. What we're trying to do is protect democracy, and with that, the rights to expression and privacy for every American that are inherently a part of it. And so I think the appropriate role of government is to reveal the hidden hand of the foreign adversary, and as appropriate, share that information with American citizens for them to make their own decisions.
LINDSAY:
How do you do that in a way that the public believes the government and what it is saying, because it's not too hard to imagine situations in which a government, whether it's Democrat or Republican, comes out and says, "This information is clearly coming from a foreign source," and people just don't believe them and believe that the government is trying to suppress their freedoms?
BRANDT:
Yeah. It's a great question, and government's done a lot of thinking about this.
So the way that this has operated is that there is a group of senior career professionals from across the interagency. They're the election security leads from the various agencies, and they meet to evaluate pieces of intelligence against a set of criteria that was set out by the Trump administration during President Trump's first term in office. At a high level, the intelligence has to be credible, specific, actionable. It has to reflect behavior that is foreign in nature, malign. And we can talk about what the definition of that is.
LINDSAY:
How would you define malign?
BRANDT:
Subversive, undeclared, criminal, or coercive.
LINDSAY:
Okay.
BRANDT:
And so we also consider things like, "Would revealing this information ultimately serve the goal of the adversary?" Right? So if this was an influence operation that got very little traction, would calling attention to it ultimately serve the adversary's goals? And so this group of senior career professionals meets to evaluate intelligence on that basis and to make a decision about delivering notifications. And by the way, not all the notifications are public. They also have the ability to deliver private notifications directly to the target or the conduit of an influence operation so that they can protect themselves.
LINDSAY:
Is there any role for state governments, or for individuals and the private sector, in combating this?
BRANDT:
Yeah, for everybody. I mean, it's a whole-of-society problem that needs a whole-of-society response. The private sector, I think, plays a very important role in providing threat intelligence reporting, because to your point, not everybody will believe U.S. government information. But I think it helps when there's sort of a chorus of voices and you get private companies sharing what they're independently seeing on their networks. And when there's harmony, I think that increases the level of trust that you can believe what you're hearing. And it also, I think very importantly, gives lots of seeds for researchers to go out and pursue, and I mean academic researchers, but also investigative journalists, to fill out the picture. And so that helps us to get a more nuanced, broader picture that we can feel confident in. I think that's important. And then, of course, election officials are on the front lines of our elections every day. They actually run our elections-
LINDSAY:
Yeah.
BRANDT:
... and are incredibly important partners in this effort. So the government needs to do its work, but we need all of the players in the ecosystem.
LINDSAY:
Is there a need for new legislation on this score, or do you think government has the necessary legislative authorities for the most part?
BRANDT:
I think what we actually need is to, in some ways, take a page out of our adversary's playbook. What they're doing is scanning the horizon, finding asymmetries that redound to their advantage, and going on offense in those places. And we need to do the same thing. So for our case, that means within the information environment, there's a bunch of things we can do to push back. But we should resist the urge to have this tit-for-tat response that I think only prolongs the competition on the adversary's terms. Like you pointed out, democracies depend on the idea that the truth is knowable, and if we pollute the information environment, for example, with influence operations of our own, I think we ultimately do more harm to ourselves than to our competitors. So a better approach is to take a couple of actions within the information domain, like basically taking the persistent engagement approach from cyberspace, putting it into-
LINDSAY:
What is persistent engagement?
BRANDT:
The idea that we should be forward-leaning and continuously engaging, a forward-leaning defense, right? And I think we can do the same in the information environment, but use truthful information to expose corruption or highlight misdeeds in ways that exploit the brittleness of authoritarian governments when faced with truthful information about their misdeeds. So I think that's what we can do within the information domain. But literally nothing says that we have to respond within the information domain. We should use our own advantages. We've got cyber capabilities to continuously make it hard for these guys to do the work they're trying to do.
LINDSAY:
What does that mean in specifics? What would you go after?
BRANDT:
So the IRA was knocked offline for a few days around the 2018 midterms. NSA publicly-
LINDSAY:
The IRA-
BRANDT:
Oh, sorry. The Internet Research Agency, which was-
LINDSAY:
Okay.
BRANDT:
... a Russian proxy troll farm.
LINDSAY:
I thought you were talking about my pension investment-
BRANDT:
No, not that. And more recently, there have been some disclosures about hunt forward operations, working with partners to be very proactive about discovering these kinds of activities. So I think if we can take out the computer systems that they operate on or do similar kinds of activities, we should do that. We can also use sanctions. We have friends and allies. We should be sharing intelligence. So I think we don't need a single piece of legislation. We need an information strategy. Our competitors have an information strategy. We need one, and it should be focused on offsetting our adversary's asymmetric advantages by exploiting asymmetric advantages of our own.
LINDSAY:
And so as you think about this going forward, are you worried in particular about elections and election tampering?
BRANDT:
Elections are a flash point for this activity.
They're not the start and end point of this-
LINDSAY:
Okay.
BRANDT:
... activity. And I think it's really important that, of course, we protect our elections because they are a cornerstone of our democracy and so important to public trust, for sure. So I am in no way diminishing the importance of election security and election protection. I think we can't stop there. We're talking about our universities and private companies that find themselves in the crosshairs of this activity. And we had sort of built up a series of processes like the one I described to address election threats. But the aperture is quite wide and there's a much broader range of threats. And I think even at the time that I left, government was still figuring out how to mature its responses in those areas.
LINDSAY:
On that note, I'll close up this episode of The President's Inbox. My guest has been Jessica Brandt, senior fellow for technology and national security at the Council. Jessica, thank you very much for joining me.
BRANDT:
Thanks for having me.
LINDSAY:
Today's episode was produced by Justin Schuster, Molly McAnany, Markus Zakaria, and Director of Video, Jeremy Sherlick. Production assistance was provided by Oscar Berry, James Cunningham, Bryan Mendives, and Kaleah Haddock.
Show Notes
This is the fourth episode in a special series from The President’s Inbox, bringing you conversations with Washington insiders to assess whether the United States is ready for a new, more dangerous world.